
    Inside the German Miracle. How trade unions shape the future of industrial working conditions

    Low unemployment rates during the crisis and a quick recovery are the main characteristics of the «German miracle». The role of the social partners in general, and of trade unions in the affected sectors in particular, appears to be undervalued, especially in macro-level economic analyses. This article provides insight into the activities of the metalworkers' union (IG Metall) in developing strategies for innovative and sustainable solutions to improve future production and working conditions.

    Integration of Leaky-Integrate-and-Fire-Neurons in Deep Learning Architectures

    Up to now, modern machine learning has mainly been based on fitting high-dimensional functions to enormous data sets, taking advantage of huge hardware resources. We show that biologically inspired neuron models such as the Leaky-Integrate-and-Fire (LIF) neuron provide novel and efficient ways of information encoding. They can be integrated into machine learning models and are a potential target for improving machine learning performance. To this end, we derive simple update rules for the LIF units from the differential equations, which are easy to integrate numerically. We apply a novel approach to train the LIF units in a supervised fashion via backpropagation, by assigning a constant value to the derivative of the neuron activation function exclusively for the backpropagation step. This simple mathematical trick helps to distribute the error among the neurons of the preceding layer. We apply our method to the IRIS blossoms image data set and show that the training technique can be used to train LIF neurons on image classification tasks. Furthermore, we show how to integrate our method into the Keras (TensorFlow) framework and run it efficiently on GPUs. To generate a deeper understanding of the mechanisms at work during training, we developed interactive illustrations, which we provide online. With this study we want to contribute to the current efforts to enhance machine intelligence by integrating principles from biology.
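
    A minimal sketch of the constant-derivative backpropagation trick described in this abstract, assuming TensorFlow/Keras. The layer sizes, time constants, threshold, and the surrogate value of 1.0 are illustrative assumptions, not the authors' published settings.

```python
import tensorflow as tf

@tf.custom_gradient
def spike(v):
    """Heaviside spike function with a constant surrogate derivative."""
    out = tf.cast(v > 0.0, tf.float32)     # forward pass: emit a spike when v crosses zero
    def grad(dy):
        return dy * 1.0                    # backward pass: treat d(spike)/dv as the constant 1
    return out, grad

class LIFLayer(tf.keras.layers.Layer):
    """Leaky-Integrate-and-Fire units unrolled for a fixed number of time steps."""
    def __init__(self, units, n_steps=20, decay=0.9, threshold=1.0):
        super().__init__()
        self.units, self.n_steps, self.decay, self.threshold = units, n_steps, decay, threshold

    def build(self, input_shape):
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer="glorot_uniform", trainable=True)

    def call(self, x):
        i_in = tf.matmul(x, self.w)        # constant input current per sample
        v = tf.zeros_like(i_in)            # membrane potential
        rate = tf.zeros_like(i_in)         # accumulated spike count
        for _ in range(self.n_steps):      # simple Euler-style update of the LIF dynamics
            v = self.decay * v + i_in
            s = spike(v - self.threshold)
            v = v * (1.0 - s)              # reset the membrane after a spike
            rate = rate + s
        return rate / self.n_steps         # the spike rate serves as the layer output

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),     # illustrative input dimensionality
    LIFLayer(16),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```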

    How deep is deep enough? -- Quantifying class separability in the hidden layers of deep neural networks

    Deep neural networks typically outperform more traditional machine learning models in their ability to classify complex data, and yet it is not clear how the individual hidden layers of a deep network contribute to the overall classification performance. We therefore introduce a Generalized Discrimination Value (GDV) that measures, in a non-invasive manner, how well different data classes separate in each given network layer. The GDV can be used for the automatic tuning of hyper-parameters, such as the width profile and the total depth of a network. Moreover, the layer-dependent GDV(L) provides new insights into the data transformations that self-organize during training: in the case of multi-layer perceptrons trained with error backpropagation, we find that classification of highly complex data sets requires a temporal reduction of class separability, marked by a characteristic 'energy barrier' in the initial part of the GDV(L) curve. Even more surprisingly, for a given data set the GDV(L) runs through a fixed 'master curve', independently of the total number of network layers. Furthermore, applying the GDV to Deep Belief Networks reveals that unsupervised training with the Contrastive Divergence method can also systematically increase class separability over tens of layers, even though the system does not 'know' the desired class labels. These results indicate that the GDV may become a useful tool to open the black box of deep learning.
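
    To make the idea concrete, here is a simplified layer-wise class-separability score in the spirit of the GDV. It is not the authors' exact definition; it merely contrasts mean within-class and between-class distances on z-scored hidden activations, and all names are illustrative.

```python
import numpy as np

def separability(activations, labels):
    """Higher values mean classes lie further apart relative to their spread."""
    x = np.asarray(activations, dtype=float)
    labels = np.asarray(labels)
    x = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-9)     # z-score each feature dimension
    classes = np.unique(labels)
    centroids = np.stack([x[labels == c].mean(axis=0) for c in classes])
    # mean distance of samples to their own class centroid
    within = np.mean([np.linalg.norm(x[labels == c] - centroids[i], axis=1).mean()
                      for i, c in enumerate(classes)])
    # mean pairwise distance between class centroids
    between = np.mean([np.linalg.norm(centroids[i] - centroids[j])
                       for i in range(len(classes)) for j in range(i + 1, len(classes))])
    return between / (within + 1e-9)

# Typical use: compute separability() on each hidden layer's activations for a
# fixed test batch and plot the score as a function of layer depth.
```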

    The stochastic resonance model of auditory perception: A unified explanation of tinnitus development, Zwicker tone illusion, and residual inhibition

    Stochastic resonance (SR) has been proposed to play a major role in auditory perception and to maintain optimal information transmission from the cochlea to the auditory system. In this way, the auditory system could adapt to changes of the auditory input at second or even sub-second timescales. In case of reduced auditory input, somatosensory projections to the dorsal cochlear nucleus would be disinhibited in order to improve hearing thresholds by means of SR. As a side effect, the increased somatosensory input, corresponding to the observed tinnitus-associated neuronal hyperactivity, is then perceived as tinnitus. In addition, the model can also explain transient phantom tone perceptions occurring after ear plugging, as well as the Zwicker tone illusion. Conversely, the model predicts that during stimulation with acoustic noise, SR would not be needed to optimize information transmission; hence the somatosensory noise would be tuned down, resulting in a transient vanishing of tinnitus, an effect referred to as residual inhibition.
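
    A toy simulation of the stochastic resonance effect itself, not of the authors' auditory model: a sub-threshold sinusoid is detected by a hard threshold, and detection quality peaks at an intermediate noise level. Signal amplitude, threshold, and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 5000)
signal = 0.8 * np.sin(2 * np.pi * 10 * t)   # sub-threshold input (threshold is 1.0)
threshold = 1.0

def detection_quality(noise_std, n_trials=50):
    """Correlation between the signal and the trial-averaged threshold-crossing output."""
    out = np.zeros_like(t)
    for _ in range(n_trials):
        noisy = signal + rng.normal(0.0, noise_std, size=t.shape)
        out += (noisy > threshold).astype(float)
    out /= n_trials
    return np.corrcoef(signal, out)[0, 1]

for sigma in [0.1, 0.3, 0.6, 1.0, 2.0]:
    print(f"noise std {sigma:>4}: correlation {detection_quality(sigma):.3f}")
# The correlation is low for very small and very large noise and peaks in
# between, which is the hallmark of stochastic resonance.
```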

    Sparsity through evolutionary pruning prevents neuronal networks from overfitting

    Modern machine learning techniques take advantage of the exponentially rising computational power of new-generation processor units. Accordingly, the number of parameters trained to solve complex tasks has increased greatly over the last decades. However, in contrast to our brain, these networks still fail to develop general intelligence in the sense of being able to solve several complex tasks with a single network architecture. One reason could be that the brain is not a randomly initialized neural network that has to be trained simply by investing a lot of computational power, but has a fixed hierarchical structure from birth. To make progress in decoding the structural basis of biological neural networks, we here chose a bottom-up approach, in which we evolutionarily trained small neural networks to perform a maze task. This simple maze task requires dynamic decision making with delayed rewards. We show that, during the evolutionary optimization, random severance of connections leads to better generalization performance compared to fully connected networks. We conclude that sparsity is a central property of neural networks and should be considered in modern machine learning approaches.
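
    A rough sketch of evolutionary optimization with random connection severance as a mutation operator, under assumptions of my own: the network size, mutation rates, and selection scheme are illustrative, the fitness function is left as a user-supplied callable, and the authors' maze task is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
N_IN, N_HID, N_OUT = 8, 16, 2

def random_genome():
    w1 = rng.normal(0, 1, (N_IN, N_HID))
    w2 = rng.normal(0, 1, (N_HID, N_OUT))
    m1 = np.ones_like(w1)                          # connection masks: 1 = present, 0 = severed
    m2 = np.ones_like(w2)
    return [w1, m1, w2, m2]

def forward(genome, x):
    w1, m1, w2, m2 = genome
    h = np.tanh(x @ (w1 * m1))
    return np.tanh(h @ (w2 * m2))

def mutate(genome, weight_sigma=0.1, prune_prob=0.02):
    w1, m1, w2, m2 = [g.copy() for g in genome]
    w1 += rng.normal(0, weight_sigma, w1.shape)    # perturb surviving weights
    w2 += rng.normal(0, weight_sigma, w2.shape)
    m1[rng.random(m1.shape) < prune_prob] = 0.0    # random severance of connections
    m2[rng.random(m2.shape) < prune_prob] = 0.0
    return [w1, m1, w2, m2]

def evolve(fitness, pop_size=50, generations=200):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:pop_size // 5]           # keep the top 20 percent
        population = parents + [mutate(parents[rng.integers(len(parents))])
                                for _ in range(pop_size - len(parents))]
    return max(population, key=fitness)

# Hypothetical usage, with X and Y standing in for task inputs and targets:
# best = evolve(lambda g: -np.mean((forward(g, X) - Y) ** 2))
```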